Vision Transformers have recently shown great promise for many vision tasks thanks to their insightful architecture design and attention mechanism. By revisiting the self-attention responses in Transformers, we empirically observe two interesting issues. First, Vision Transformers present a query-irrelevant behavior at deep layers: the attention maps exhibit nearly identical global contexts regardless of the query patch position (and are also head-irrelevant). Second, the attention maps are intrinsically sparse, with a few tokens dominating the attention weights; introducing knowledge from ConvNets largely smooths the attention and enhances performance. Motivated by the above observations, we generalize the self-attention formulation to abstract a query-irrelevant global context directly, and further integrate this global context into convolutions. The resulting model, a Fully Convolutional Vision Transformer (i.e., FCViT), consists purely of convolutional layers and firmly inherits the merits of both the attention mechanism and convolutions, including the dynamic property, weight sharing, and short- and long-range feature modeling. Experimental results demonstrate the effectiveness of FCViT. With fewer than 14M parameters, our FCViT-S12 outperforms the related work ResT-Lite by 3.7% top-1 accuracy on ImageNet-1K. When scaling FCViT to larger models, we still perform better than the previous state-of-the-art ConvNeXt with even fewer parameters. FCViT-based models also demonstrate promising transferability to downstream tasks such as object detection, instance segmentation, and semantic segmentation. Code and models are available at: https://github.com/ma-xu/FCViT.
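To make the query-irrelevant idea concrete, the sketch below pools one shared attention map into a single global context vector and adds it to a depthwise-convolution branch. It is a minimal PyTorch illustration under our own naming (QueryIrrelevantContext, score/value/local/proj are assumptions), not the paper's released FCViT code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryIrrelevantContext(nn.Module):
    """Minimal sketch: one shared (query-irrelevant) attention map pools a
    single global context vector, which is then injected into a depthwise
    convolution branch. Illustrative only, not the exact FCViT block."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Conv2d(dim, 1, kernel_size=1)   # one logit per position
        self.value = nn.Conv2d(dim, dim, kernel_size=1)
        self.local = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                                # x: (B, C, H, W)
        B, C, H, W = x.shape
        w = F.softmax(self.score(x).flatten(2), dim=-1)  # (B, 1, H*W), shared by all queries
        v = self.value(x).flatten(2)                     # (B, C, H*W)
        ctx = (v * w).sum(dim=-1).view(B, C, 1, 1)       # global context, no per-query cost
        return self.proj(self.local(x) + ctx)            # fuse short- and long-range branches

x = torch.randn(2, 64, 14, 14)
print(QueryIrrelevantContext(64)(x).shape)               # torch.Size([2, 64, 14, 14])
```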
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
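As a concrete illustration of two of the surveyed practices (k-fold cross-validation and ensembling of identical models), here is a minimal sketch on toy data; real challenge pipelines would train a network per fold, but the fold/average pattern is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# Toy stand-in for extracted image features and binary labels.
rng = np.random.default_rng(0)
X, y = rng.random((100, 16)), rng.integers(0, 2, size=100)

# K-fold cross-validation on the training set: one model per fold.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_models = []
for train_idx, _ in kf.split(X):
    fold_models.append(LogisticRegression().fit(X[train_idx], y[train_idx]))

# Ensembling of multiple identical models: average per-fold probabilities.
X_new = rng.random((10, 16))
avg_prob = np.mean([m.predict_proba(X_new)[:, 1] for m in fold_models], axis=0)
pred = (avg_prob > 0.5).astype(int)
```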
This paper studies how to flexibly integrate reconstructed 3D models into practical 3D modeling pipelines such as 3D scene creation and rendering. Due to technical difficulty, one can only obtain rough 3D models (R3DMs) for most real objects using existing 3D reconstruction techniques. As a result, physically-based rendering (PBR) would render low-quality images or videos for scenes constructed from R3DMs. One promising solution is to represent real-world objects as Neural Fields such as NeRFs, which are able to generate photo-realistic renderings of an object under desired viewpoints. However, a drawback is that the views synthesized through Neural Fields Rendering (NFR) cannot reflect the simulated lighting details on R3DMs in PBR pipelines, especially when object interactions in the 3D scene creation cause local shadows. To solve this dilemma, we propose a lighting transfer network (LighTNet) to bridge NFR and PBR, such that they can benefit from each other. LighTNet reasons about a simplified image composition model, remedies the uneven surface issue caused by R3DMs, and is empowered by several perceptually motivated constraints and a new Lab angle loss which enhances the contrast between lighting strength and colors. Comparisons demonstrate that LighTNet is superior in synthesizing impressive lighting, and is promising in pushing NFR further into practical 3D modeling workflows. Project page: https://3d-front-future.github.io/LighTNet .
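The abstract does not spell out the Lab angle loss, so the following is only a plausible reading: treat each pixel's (L, a, b) triple as a vector and penalise the angle between prediction and ground truth, which couples lighting strength (L) and colour (a, b) directionally rather than by magnitude. The function name and formulation are assumptions, not LighTNet's actual loss.

```python
import torch

def lab_angle_loss(pred_lab, gt_lab, eps=1e-6):
    """Hypothetical sketch of an angle-based loss in Lab space.

    Treats each pixel's (L, a, b) triple as a 3-vector and penalises the
    angle between prediction and ground truth, so lighting strength (L)
    and colour (a, b) must agree in direction, not just magnitude.
    The exact LighTNet formulation may differ.
    """
    # pred_lab, gt_lab: (B, 3, H, W) tensors already converted to Lab
    dot = (pred_lab * gt_lab).sum(dim=1)
    norm = pred_lab.norm(dim=1) * gt_lab.norm(dim=1)
    cos = (dot / (norm + eps)).clamp(-1 + eps, 1 - eps)
    return torch.acos(cos).mean()

loss = lab_angle_loss(torch.rand(2, 3, 8, 8), torch.rand(2, 3, 8, 8))
```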
Low-light stereo image enhancement (LLSIE) is a relatively new task that aims to enhance the quality of visually unpleasant stereo images captured in dark conditions. So far, deep LLSIE has been explored in very few studies owing to its challenging nature, and the task remains far from well addressed: current methods clearly suffer from two shortcomings: 1) insufficient cross-view interaction; 2) a lack of long-range dependency for intra-view learning. In this paper, we therefore propose a novel LLSIE model, termed Sufficient Cross-View Interaction Network (SufrinNet). Specifically, we present a sufficient inter-view interaction module (SIIM) to enhance the information exchange across views. SIIM not only discovers cross-view correlations at different scales, but also explores cross-scale information interaction. Besides, we present a spatial-channel information mining block (SIMB) for intra-view feature extraction, whose benefits are twofold: one is long-range dependency capture to build spatial long-range relationships, and the other is expanded channel information refinement that enhances information flow in the channel dimension. Extensive experiments on the Flickr1024, KITTI 2012, KITTI 2015 and Middlebury datasets show that our method obtains better illumination adjustment and detail recovery, and achieves SOTA performance compared to other related methods. Our code, datasets and models will be publicly available.
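A minimal sketch of what cross-view interaction can look like for a stereo pair: each view queries the other with attention restricted to the same scan-line (the epipolar constraint for rectified stereo). This is an illustrative stand-in, not SufrinNet's actual SIIM.

```python
import torch
import torch.nn as nn

class CrossViewInteraction(nn.Module):
    """Sketch of cross-view interaction for a rectified stereo pair.

    Each view attends to the other along the horizontal (epipolar) axis,
    mimicking how stereo correspondences lie on the same scan-line."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def attend(self, a, b):                    # a queries b; both (B, H, W, C)
        q, k, v = self.q(a), self.k(b), self.v(b)
        # (B, H, W, W): attention over positions within the same row
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return a + attn @ v                    # residual fusion of the other view

    def forward(self, left, right):
        return self.attend(left, right), self.attend(right, left)

l, r = torch.randn(2, 16, 32, 64), torch.randn(2, 16, 32, 64)
out_l, out_r = CrossViewInteraction(64)(l, r)
```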
Brain tumor segmentation (BTS) in magnetic resonance images (MRI) is essential for brain tumor diagnosis, cancer management, and research purposes. With the great success of the decade-long BraTS challenges and the progress of CNN and Transformer algorithms, many outstanding BTS models have been proposed to tackle the difficulties of BTS from different technical aspects. However, existing studies hardly consider how to fuse multi-modality images in a reasonable way. In this paper, we leverage the clinical knowledge of how radiologists diagnose brain tumors from multiple MRI modalities and propose a clinical knowledge-driven brain tumor segmentation model, called CKD-TransBTS. Instead of directly concatenating all modalities, we re-organize the input modalities by dividing them into two groups according to the imaging principles of MRI. A dual-branch hybrid encoder with the proposed modality-correlated cross-attention block (MCCA) is designed to extract multi-modality image features. The proposed model inherits strengths from both Transformer and CNN: local feature representation ability for precise lesion boundaries, and long-range feature extraction for 3D volumetric images. To bridge the gap between Transformer and CNN features, we propose a Trans&CNN Feature Calibration block (TCFC) in the decoder. We compare the proposed model with five CNN-based models and six Transformer-based models on the BraTS 2021 challenge dataset. Extensive experiments demonstrate that the proposed model achieves state-of-the-art brain tumor segmentation performance compared with all competitors.
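The grouping idea can be sketched as follows: the four BraTS modalities are split into two branches (T1/T1ce vs. T2/FLAIR is a common pairing by imaging principle) that exchange information via cross-attention. Module names and the exact pairing are assumptions, not the paper's MCCA implementation.

```python
import torch
import torch.nn as nn

class DualBranchEncoderStage(nn.Module):
    """Sketch of modality grouping: four MRI modalities are split into two
    groups and encoded by two branches that exchange information through
    cross-attention. Illustrative, not the exact MCCA block."""
    def __init__(self, dim):
        super().__init__()
        self.embed1 = nn.Conv3d(2, dim, kernel_size=3, padding=1)  # T1 + T1ce
        self.embed2 = nn.Conv3d(2, dim, kernel_size=3, padding=1)  # T2 + FLAIR
        self.cross = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, t1, t1ce, t2, flair):    # each (B, 1, D, H, W)
        f1 = self.embed1(torch.cat([t1, t1ce], dim=1))
        f2 = self.embed2(torch.cat([t2, flair], dim=1))
        s1 = f1.flatten(2).transpose(1, 2)     # (B, N, C) token sequences
        s2 = f2.flatten(2).transpose(1, 2)
        # each branch queries the other to correlate the modality groups
        o1, _ = self.cross(s1, s2, s2)
        o2, _ = self.cross(s2, s1, s1)
        return o1 + s1, o2 + s2

vols = [torch.randn(1, 1, 8, 16, 16) for _ in range(4)]
a, b = DualBranchEncoderStage(32)(*vols)
```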
For the two molecular graph datasets and one protein association subgraph dataset in the OGB graph classification tasks, we design a graph neural network framework for graph classification by introducing PAS (Pooling Architecture Search). Meanwhile, we improve GNN training based on the GNN topology design method F2GNN. Finally, we achieve performance breakthroughs on these three datasets, performing considerably better than other methods with fixed aggregation functions. This demonstrates the high generalization ability of the NAS method across multiple tasks, as well as our advantage in handling graph property prediction tasks.
DNA double-strand breaks (DSBs) are a form of DNA damage that can lead to abnormal chromosomal rearrangements. Recent techniques based on high-throughput experiments suffer from notably high cost and technical challenges. Therefore, we design GraphDSB, a graph neural network-based method for predicting DSBs that uses DNA sequence features and chromosome structure information. To improve the expressive ability of the model, we introduce a Jumping Knowledge architecture and several effective structural encoding methods. The contribution of structural information to DSB prediction is validated through experiments on datasets from normal human epidermal keratinocytes (NHEK) and the chronic myeloid leukemia cell line (K562), and ablation studies further demonstrate the effectiveness of the designed components in the proposed GraphDSB framework. Finally, we use GNNExplainer to analyze the contributions of node features and topology to DSB prediction, and demonstrate the high contribution of 5-mer DNA sequence features and two chromatin interaction patterns.
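For reference, a Jumping Knowledge-style GNN keeps every layer's node representation and concatenates them before the classifier, so shallow and deep neighbourhood views both reach the output. The dense-adjacency sketch below shows that pattern in isolation; names and the plain message-passing scheme are assumptions, not the GraphDSB code.

```python
import torch
import torch.nn as nn

class JKGraphNet(nn.Module):
    """Sketch of a GNN with Jumping Knowledge-style aggregation: per-layer
    node representations are concatenated at the end. Dense-adjacency
    message passing is used for self-containment."""
    def __init__(self, in_dim, hidden, num_layers=3):
        super().__init__()
        dims = [in_dim] + [hidden] * num_layers
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers))
        self.out = nn.Linear(hidden * num_layers, 1)  # e.g., a DSB logit per node

    def forward(self, x, adj):                # x: (N, F), adj: (N, N) normalised
        jumps = []
        for layer in self.layers:
            x = torch.relu(layer(adj @ x))    # aggregate neighbours, transform
            jumps.append(x)
        return self.out(torch.cat(jumps, dim=-1))  # jumping-knowledge concat

x, adj = torch.randn(10, 8), torch.eye(10)
print(JKGraphNet(8, 16)(x, adj).shape)        # torch.Size([10, 1])
```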
In recent years, graph neural networks (GNNs) have shown superior performance in diverse real-world applications. To improve model capacity, besides designing the aggregation operations, the GNN topology design is also very important. In general, there are two mainstream GNN topology design manners. The first stacks aggregation operations to obtain higher-level features, but suffers from performance degradation as the network goes deeper. The second uses multiple aggregation operations in each layer, which provides adequate and independent feature extraction stages on local neighbors, but is costly when obtaining higher-level information. To enjoy the benefits of both manners while alleviating their corresponding deficiencies, we learn to design the topology of GNNs from a novel feature fusion perspective, dubbed F$^2$GNN. Specifically, we provide a feature fusion perspective on designing GNN topology and propose a novel framework that unifies existing topology designs with feature selection and fusion strategies. We then develop a neural architecture search method on top of the unified framework, with a set of selection and fusion operations in the search space and an improved differentiable search algorithm. Performance gains on eight real-world datasets demonstrate the effectiveness of F$^2$GNN. We further conduct experiments to show that, by adaptively using features of different levels, F$^2$GNN can alleviate the deficiencies of existing GNN topology design manners while improving model capacity, in particular alleviating the over-smoothing problem.
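The feature-fusion perspective can be boiled down to one differentiable knob: softmax-normalised architecture weights that select and combine candidate layer outputs. The sketch below shows that knob in isolation, as a stand-in for F$^2$GNN's full selection/fusion search space; the class name is an assumption.

```python
import torch
import torch.nn as nn

class LearnableFusion(nn.Module):
    """Sketch of the feature-fusion view of GNN topology: every layer's
    output is a candidate input to the fusion stage, and softmax-normalised
    architecture weights select and combine them differentiably."""
    def __init__(self, num_candidates, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(num_candidates))  # arch weights
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats):                 # list of (N, dim) layer outputs
        w = torch.softmax(self.alpha, dim=0)  # differentiable selection
        fused = sum(wi * f for wi, f in zip(w, feats))
        return self.proj(fused)

feats = [torch.randn(10, 16) for _ in range(3)]
print(LearnableFusion(3, 16)(feats).shape)    # torch.Size([10, 16])
```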
In recent years, graph neural networks (GNNs) have shown superior performance for different applications on real-world datasets. To improve model capacity and alleviate the over-smoothing problem, several methods have been proposed to incorporate intermediate layers via layer-wise connections. However, due to the highly diverse graph types, the performance of existing methods varies across different graphs, leading to a need for data-specific layer-wise connection methods. To address this problem, we propose LLC (Learn Layer-wise Connections), a novel framework based on neural architecture search (NAS), to learn adaptive connections among intermediate layers in GNNs. LLC contains a novel search space consisting of 3 types of blocks and learnable connections, together with a differentiable search process enabling an efficient search procedure. Extensive experiments are conducted on five real-world datasets, and the results show that the searched layer-wise connections can not only improve performance but also alleviate the over-smoothing problem.
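A minimal sketch of learnable layer-wise connections: one sigmoid gate per candidate edge lets each layer take a weighted mix of all earlier outputs, so the search can keep or drop any skip connection per dataset. Illustrative only; LLC's actual block-based search space is richer than this single gating scheme.

```python
import torch
import torch.nn as nn

class LLCStyleNet(nn.Module):
    """Sketch of learnable layer-wise connections: layer l receives a gated
    sum of all earlier outputs, with one sigmoid gate per candidate edge."""
    def __init__(self, dim, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))
        # gates[l][j]: strength of connection from output j (0 = input) to layer l
        self.gates = nn.ParameterList(
            nn.Parameter(torch.zeros(l + 1)) for l in range(num_layers))

    def forward(self, x, adj):                # x: (N, dim), adj: (N, N) normalised
        outs = [x]
        for layer, g in zip(self.layers, self.gates):
            mix = sum(torch.sigmoid(gi) * o for gi, o in zip(g, outs))
            outs.append(torch.relu(layer(adj @ mix)))
        return outs[-1]

x, adj = torch.randn(10, 16), torch.eye(10)
print(LLCStyleNet(16)(x, adj).shape)          # torch.Size([10, 16])
```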
By capturing spectral data from a wide frequency range along with spatial information, hyperspectral imaging (HSI) can detect minor differences in terms of temperature, moisture and chemical composition. HSI has therefore been applied successfully in various applications, including remote sensing for security and defence, precision agriculture for vegetation and crop monitoring, and quality control for food/drink and pharmaceuticals. However, for condition monitoring and damage detection in carbon fibre reinforced polymer (CFRP), the use of HSI is a relatively untouched area, as existing non-destructive testing (NDT) techniques focus mainly on delivering information about the physical integrity of structures, not about material composition. To this end, HSI can provide a unique way to tackle this challenge. In this paper, using a near-infrared HSI camera, applications of HSI for the non-destructive inspection of CFRP products are introduced, with the EU H2020 FibreEUse project as the background. Technical challenges and solutions for three case studies are presented in detail, covering adhesive residue detection, surface damage detection and cobot-based automated inspection. Experimental results fully demonstrate the great potential of HSI and related vision techniques for the NDT of CFRP, particularly the potential to satisfy industrial manufacturing environments.